Deep neural networks (DNNs) have been widely used in many fields. However, studies have shown that DNNs are vulnerable to adversarial examples: inputs carrying tiny perturbations that severely mislead the model's predictions. Furthermore, even when malicious attackers cannot obtain the underlying model parameters, they can use adversarial examples to attack various DNN-based task systems. Researchers have proposed various defense methods to protect DNNs, such as reducing the aggressiveness of adversarial examples through preprocessing or improving model robustness by adding modules. However, some defense methods are effective only against small-scale examples or small perturbations and offer limited protection against adversarial examples with large perturbations. This paper grades the perturbation on each input example and assigns a different defense strategy to each perturbation strength. Experimental results show that the proposed method effectively improves defense performance. In addition, the proposed method does not modify the task model and can be used as a standalone preprocessing module, which significantly reduces deployment cost in practical applications.
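As a rough illustration of the grading idea, one could estimate perturbation strength with a simple heuristic and dispatch a different preprocessing defense per grade. Everything below is hypothetical: the abstract specifies neither the grading criterion nor the concrete per-grade defenses, so the strength proxy and the stand-in defenses are assumptions.

```python
import numpy as np

def estimate_strength(x):
    """Crude perturbation-strength proxy: mean energy of the residual
    after light smoothing (hypothetical; the paper's criterion may differ)."""
    smooth = (x + np.roll(x, 1, 0) + np.roll(x, -1, 0)
                + np.roll(x, 1, 1) + np.roll(x, -1, 1)) / 5.0
    return float(np.abs(x - smooth).mean())

def preprocess(x, thresholds=(0.01, 0.05)):
    """Dispatch a defense per grade; the concrete defenses here
    (identity / bit-depth reduction / smoothing) are stand-ins."""
    s = estimate_strength(x)
    if s < thresholds[0]:
        return x                          # weak: pass through untouched
    if s < thresholds[1]:
        return np.round(x * 16) / 16      # medium: bit-depth reduction
    # strong: heavier smoothing (a real system might use a denoiser)
    return (x + np.roll(x, 1, 0) + np.roll(x, -1, 0)) / 3.0

x = np.random.default_rng(0).random((32, 32))
y = preprocess(x)
print(y.shape)  # (32, 32)
```

Because the dispatcher only transforms inputs, it can sit in front of any unmodified task model, matching the deployment story in the abstract.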
As acquiring manual labels on data can be costly, unsupervised domain adaptation (UDA), which transfers knowledge learned from a labeled source dataset to an unlabeled target dataset, is gaining popularity. While extensive studies have been devoted to improving model accuracy on the target domain, the important issue of model robustness has been neglected. Worse, conventional adversarial training (AT) methods for improving model robustness are inapplicable under the UDA scenario, since they train models on adversarial examples generated by a supervised loss function. In this paper, we present a new meta self-training pipeline, named SRoUDA, for improving the adversarial robustness of UDA models. Based on the self-training paradigm, SRoUDA starts by pre-training a source model, applying a UDA baseline to labeled source data and unlabeled target data with a newly developed random masked augmentation (RMA); it then alternates between adversarially training the target model on pseudo-labeled target data and fine-tuning the source model via a meta step. While self-training allows the direct incorporation of AT into UDA, the meta step in SRoUDA further helps mitigate error propagation from noisy pseudo labels. Extensive experiments on various benchmark datasets demonstrate the state-of-the-art performance of SRoUDA, which achieves significant robustness improvements without harming clean accuracy. Code is available at https://github.com/Vision.
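The alternation described above can be sketched as a control-flow skeleton. Every component here is a stand-in stub with hypothetical names; only the loop structure follows the abstract (pseudo-label the target data with the source model, adversarially train the target model on those labels, then refine the source model in a meta step):

```python
def pseudo_label(predict, target_data):
    # Label the unlabeled target data with the current source model.
    return [(x, predict(x)) for x in target_data]

def adversarial_train(target_model, labeled):
    # Placeholder for AT: in practice, generate adversarial examples
    # from the pseudo labels and minimize a robust loss.
    target_model["steps"] += len(labeled)

def meta_step(source_model, target_model, target_data):
    # Placeholder meta update: fine-tune the source model using the
    # target model's feedback to curb pseudo-label noise.
    source_model["meta_updates"] += 1

def srouda(source_model, target_model, target_data, rounds=3):
    for _ in range(rounds):
        labeled = pseudo_label(source_model["predict"], target_data)
        adversarial_train(target_model, labeled)
        meta_step(source_model, target_model, target_data)
    return source_model, target_model

src = {"predict": lambda x: int(x > 0), "meta_updates": 0}
tgt = {"steps": 0}
src, tgt = srouda(src, tgt, [-1.0, 0.5, 2.0])
print(src["meta_updates"], tgt["steps"])  # 3 9
```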
Deep neural networks have strong capabilities for memorizing the underlying training data, which can be a serious privacy concern. An effective solution to this problem is to train models with differential privacy (DP), which provides rigorous privacy guarantees by injecting random noise into the gradients. This paper focuses on the scenario where sensitive data are distributed among multiple participants, who jointly train a model through federated learning (FL), using both secure multiparty computation (MPC) to ensure the confidentiality of each gradient update and differential privacy to avoid data leakage in the resulting model. A major challenge in this setting is that common mechanisms for enforcing DP in deep learning, which inject real-valued noise, are fundamentally incompatible with MPC, which exchanges finite-field integers among the participants. Consequently, most existing DP mechanisms require rather high noise levels, leading to poor model utility. Motivated by this, we propose the Skellam mixture mechanism (SMM), an approach for enforcing DP on models built via FL. Compared to existing methods, SMM eliminates the assumption that the input gradients must be integer-valued and thus reduces the amount of noise injected to preserve DP. Further, SMM allows tight privacy accounting due to the nice composition and sub-sampling properties of the Skellam distribution, which are key to accurate deep learning with DP. The theoretical analysis of SMM is highly non-trivial, especially considering (i) the complicated math of differentially private deep learning in general and (ii) the fact that the mixture of two Skellam distributions is rather complex and, to our knowledge, has not been studied in the DP literature. Extensive experiments in various practical settings demonstrate that SMM consistently and significantly outperforms existing solutions in terms of the utility of the resulting model.
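The distributional fact the mechanism builds on can be illustrated in a few lines: a Skellam variate is the difference of two independent Poisson variates, so it is integer-valued (and hence compatible with finite-field MPC arithmetic) and symmetric around zero. This is a minimal sketch of that property only, not the paper's full mechanism:

```python
import numpy as np

rng = np.random.default_rng(0)

def skellam_noise(mu, size):
    # A Skellam(mu, mu) sample is the difference of two independent
    # Poisson(mu) samples: integer-valued, mean 0, variance 2*mu.
    return rng.poisson(mu, size) - rng.poisson(mu, size)

noise = skellam_noise(mu=4.0, size=10_000)
print(noise.mean())  # near 0
print(noise.var())   # near 2 * mu = 8
```

Because the samples are already integers, they can be added to quantized gradients without a rounding step, which is where real-valued Gaussian noise runs into trouble under MPC.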
Recently, neural network based methods have shown their power in learning more expressive features on the task of knowledge graph embedding (KGE). However, the performance of deep methods often falls behind that of shallow ones on simple graphs. One possible reason is that deep models are difficult to train, while shallow models may suffice for accurately representing the structure of simple KGs. In this paper, we propose a neural network based model, named DeepE, to address this problem; it stacks multiple building blocks to predict the tail entity from the head entity and the relation. Each building block is the addition of a linear and a non-linear function. The stacked building blocks are equivalent to a group of learning functions with different non-linear depths. Hence, DeepE allows deep functions to learn deep features and shallow functions to learn shallow features. Through extensive experiments, we find that DeepE outperforms other state-of-the-art baseline methods. A major advantage of DeepE is its robustness: DeepE achieves a Mean Rank (MR) score that is 6%, 30%, and 65% lower than the best baseline methods on FB15k-237, WN18RR, and YAGO3-10, respectively. Our design makes it possible to train much deeper networks for KGE, e.g. 40 layers on FB15k-237, without sacrificing precision on simple relations.
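The building block described above ("the addition of a linear and a non-linear function") can be sketched as follows. The weight shapes, activation choice, and stacking details are assumptions, since the abstract does not fix them; the point is only that stacking such blocks yields paths of non-linear depth 0 through k, so shallow and deep features coexist:

```python
import numpy as np

rng = np.random.default_rng(42)

def relu(x):
    return np.maximum(x, 0.0)

class DeepEBlock:
    """One hypothetical building block: output = linear(x) + nonlinear(x).
    The paper's exact layer layout (normalization, dropout, etc.) may differ."""
    def __init__(self, dim):
        self.w_lin = rng.normal(scale=0.1, size=(dim, dim))
        self.w_nonlin = rng.normal(scale=0.1, size=(dim, dim))

    def __call__(self, x):
        # Linear path preserves shallow features; non-linear path adds depth.
        return x @ self.w_lin + relu(x @ self.w_nonlin)

dim = 16
blocks = [DeepEBlock(dim) for _ in range(3)]
h = rng.normal(size=(4, dim))  # a batch of 4 head+relation embeddings
for blk in blocks:
    h = blk(h)
print(h.shape)  # (4, 16)
```

Expanding the composition of three blocks shows why this behaves like an ensemble over depths: each block contributes a purely linear term and a non-linear term, so the output mixes terms that passed through 0, 1, 2, or 3 non-linearities.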
Binaural rendering of ambisonic signals is of broad interest for virtual reality and immersive media. Conventional methods often require manually measured Head-Related Transfer Functions (HRTFs). To address this issue, we collect a paired ambisonic-binaural dataset and propose an end-to-end deep learning framework. Experimental results show that neural networks outperform the conventional method on objective metrics and achieve comparable results on subjective metrics. To validate the proposed framework, we experimentally explore different settings of the input features, model structures, output features, and loss functions. Our proposed system achieves an SDR of 7.32 and MOS scores of 3.83, 3.58, 3.87, and 3.58 in the quality, timbre, localization, and immersion dimensions, respectively.
Flocking is a very challenging problem in multi-agent systems, and traditional flocking methods generally require complete knowledge of the environment and a control model. In this paper, we propose Evolutionary Multi-Agent Reinforcement Learning (EMARL) for flocking tasks, a hybrid algorithm that combines cooperation and competition with little prior knowledge. For the cooperation part, we design each agent's reward for the flocking task based on the Boids model. For the competition part, agents with high fitness are designated as senior agents and those with low fitness as junior agents, and junior agents randomly inherit the parameters of senior agents. To intensify competition, we further design an evolutionary selection mechanism, which proves effective for credit assignment in flocking tasks. Experimental results on a series of challenging and self-contrast benchmarks show that EMARL significantly outperforms purely competitive or purely cooperative methods.
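A Boids-style reward of the kind described for the cooperation part can be sketched by combining the three classic Boids terms (cohesion, separation, alignment) per agent. The exact reward shaping below is hypothetical; the abstract only says the reward is designed based on the Boids model:

```python
import numpy as np

def boids_reward(pos, vel, i, sep_dist=1.0):
    """Hypothetical per-agent flocking reward from the three Boids rules.
    pos, vel: (n, 2) arrays of agent positions and velocities."""
    others = np.arange(len(pos)) != i
    # Cohesion: stay close to the flock's center of mass.
    cohesion = -np.linalg.norm(pos[others].mean(axis=0) - pos[i])
    # Separation: penalize neighbors closer than sep_dist.
    dists = np.linalg.norm(pos[others] - pos[i], axis=1)
    separation = -np.sum(np.maximum(sep_dist - dists, 0.0))
    # Alignment: match the average heading of the other agents.
    alignment = -np.linalg.norm(vel[others].mean(axis=0) - vel[i])
    return cohesion + separation + alignment

pos = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
vel = np.array([[1.0, 0.0], [1.0, 0.0], [1.0, 0.0]])
print(boids_reward(pos, vel, 0))
```

Here agent 0 is aligned with the flock and outside the separation radius, so only the cohesion term is non-zero.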
Recent works have shown the great success of deep learning models on in-vocabulary (IV) scene text recognition. However, in real-world scenarios out-of-vocabulary (OOV) words are of great importance, and SOTA recognition models often perform poorly in the OOV setting. Motivated by the intuition that learned language priors have limited predictive power for OOV words, we design a framework named Vision-Language Adaptive Mutual Decoder (VLAMD) to partially tackle the OOV problem. VLAMD consists of three main components. First, we build an attention-based LSTM decoder with two adaptively merged vision-only modules, yielding a vision-language balanced main branch. Second, we add an auxiliary query-based autoregressive transformer decoding head for common visual and language prior representation learning. Third, we couple these two designs with bidirectional training for more diverse language modeling and perform mutual sequential decoding to obtain robust results. Our approach achieved 70.31% and 59.61% word accuracy on the IV+OOV and OOV settings, respectively, on the cropped word recognition task of the OOV-ST Challenge at the ECCV 2022 TiE Workshop, where we ranked first on both settings.
Arbitrary-shape text detection is a challenging task due to the large variations in size and aspect ratio, arbitrary orientations or shapes, inaccurate annotations, etc., and it has attracted considerable attention recently. However, accurate pixel-level annotation of text is expensive, and existing scene text detection datasets only provide coarse-grained boundary annotations. Consequently, there are always numerous misclassified text pixels or background pixels, degrading the performance of segmentation-based text detection methods. Generally, whether a pixel belongs to text is highly correlated with its distance to the annotated boundary. Based on this observation, in this paper we propose an innovative and robust segmentation-based detection method via probability maps to accurately detect text instances. Specifically, we adopt a Sigmoid Alpha Function (SAF) to transfer the distance between a pixel and the annotated boundary into a probability map. However, due to the uncertainty of coarse-grained text boundary annotations, a single probability map cannot cover complex probability distributions well. Therefore, we adopt a group of probability maps computed by a series of Sigmoid Alpha Functions to describe the possible probability distributions. In addition, we propose an iterative model that learns to predict and assimilate probability maps, providing sufficient information to reconstruct text instances. Finally, a simple region-growing algorithm is adopted to aggregate the probability maps into complete text instances. Experimental results show that our method achieves state-of-the-art detection accuracy on several benchmarks.
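The distance-to-probability idea can be sketched with a sigmoid-shaped mapping whose sharpness parameter alpha varies across the group of maps. The concrete functional form below is an assumption for illustration; the paper's exact SAF definition may differ:

```python
import numpy as np

def sigmoid_alpha(d, alpha):
    """Hypothetical sigmoid-style mapping from signed distance d
    (positive inside the annotated boundary, negative outside) to a
    text probability; alpha controls the sharpness of the transition."""
    return 1.0 / (1.0 + np.exp(-alpha * d))

# Signed distances of five pixels to the annotated boundary (in pixels).
d = np.array([-4.0, -1.0, 0.0, 1.0, 4.0])

# A group of maps with different alpha values covers several plausible
# distributions around the coarse, uncertain boundary.
maps = np.stack([sigmoid_alpha(d, a) for a in (0.5, 1.0, 2.0)])
print(maps.shape)  # (3, 5)
print(maps[:, 2])  # pixels exactly on the boundary map to 0.5 for every alpha
```

Smaller alpha values spread probability mass across the boundary region (tolerating annotation noise), while larger values approach a hard mask.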
Emotion recognition technology enables computers to classify human affective states into discrete categories. However, emotions may fluctuate rather than remain stable, even over short periods. It is also difficult to fully exploit the spatial distribution of EEG signals due to their 3-D topological structure. To address these issues, in this study we propose a Local Temporal-Spatial pattern learning Graph Attention Network (LTS-GAT). In LTS-GAT, a divide-and-conquer scheme is used to examine local information in the temporal and spatial dimensions of EEG patterns based on the graph attention mechanism. A dynamic-domain discriminator is added to improve robustness against inter-individual variations in EEG statistics, so as to learn robust EEG feature representations across different participants. We evaluated LTS-GAT on two public datasets for affective computing research under the subject-dependent and subject-independent paradigms. The effectiveness of the LTS-GAT model is demonstrated in comparison with other existing mainstream methods. In addition, a visualization method is used to illustrate the relations between different brain regions and emotion recognition. Meanwhile, the weights of different time segments are also visualized to investigate the emotion sparsity problem.
Graph-based models have recently achieved great success in person re-identification tasks: they first compute the graph topology (affinity) between different persons and then pass information among them to obtain stronger features. However, we find that existing graph-based methods suffer from two problems in the visible-infrared person re-identification (VI-ReID) task: 1) the train-test modality balance gap, which is a property of the VI-ReID task. The amounts of data from the two modalities are balanced in the training stage but extremely unbalanced at inference, leading to poor generalization of graph-based VI-ReID methods. 2) A sub-optimal topology structure caused by the end-to-end learning manner of the graph module. Our analysis shows that the well-trained input features weaken the learning of the graph topology, making it insufficiently generalizable during inference. In this paper, we propose a Counterfactual Intervention Feature Transfer (CIFT) method to tackle these problems. Specifically, Homogeneous and Heterogeneous Feature Transfer (H2FT) is designed to reduce the train-test modality gap via two independently designed graph modules and an unbalanced-scenario simulation. Moreover, Counterfactual Relation Intervention (CRI) is proposed to utilize counterfactual intervention and causal-effect tools to highlight the role of the topology structure throughout the training process, which makes the graph topology more reliable. Extensive experiments on standard VI-ReID benchmarks demonstrate that CIFT outperforms state-of-the-art methods under various settings.